''Boosting Algorithms for Detector Cascade Learning''〔P. Bartlett and M. Traskin. AdaBoost is consistent. Journal of Machine Learning Research, 8:2347–2368, December 2007.〕〔S. Brubaker, M. Mullin, and J. Rehg. On the design of cascades of boosted ensembles for face detection. International Journal of Computer Vision, 77:65–86, 2008.〕 was proposed by Mohammad Saberian and Nuno Vasconcelos〔(Boosting Algorithms for Detector Cascade Learning )〕 in 2014. It builds on the Viola–Jones object detection framework.〔(Rapid object detection using a boosted cascade of simple features )〕

== Motivation of improvement ==

Paul Viola and Michael Jones introduced a cascade of classifiers learned with AdaBoost. However, their framework offers no principled way to choose the number of stages in the cascade or the number of features per stage of each AdaBoost classifier. In ''Rapid Object Detection using a Boosted Cascade of Simple Features'', Viola and Jones only give a crude, hand-tuned configuration: the first five layers of the detector use 1, 10, 25, 25 and 50 features respectively, and the remaining layers use increasingly many features.

This is the central problem addressed in ''Boosting Algorithms for Detector Cascade Learning''. In that work, Saberian and Vasconcelos automatically learn both the configuration and the stages of a high-detection-rate detector cascade. This is accomplished with the ''fast cascade boosting'' (FCBoost) algorithm, which is derived from a Lagrangian risk that trades off detection performance and speed. FCBoost optimizes this risk with respect to a predictor that complies with the sequential decision-making structure of the cascade architecture.
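The paper's exact objective is more detailed, but the trade-off can be sketched schematically. In the form below, the symbols <math>R_E</math>, <math>R_C</math> and <math>\eta</math> are illustrative names (not taken verbatim from the paper) for the detection-error risk, the computational-cost risk, and the Lagrange multiplier that balances them:

:<math>\mathcal{L}[f] \;=\; R_E[f] \;+\; \eta\, R_C[f], \qquad \eta \ge 0,</math>

where <math>f</math> denotes the cascaded predictor. Boosting iterations then descend this combined risk, so adding a weak learner (or a new stage) is only worthwhile if its gain in detection performance outweighs the extra evaluation cost weighted by <math>\eta</math>.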
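The "sequential decision-making structure" referred to above is the early-rejection loop of a detector cascade. The following is a minimal, self-contained sketch (not the authors' implementation; the stage functions, thresholds and feature counts are invented for illustration) of why per-stage feature counts govern speed: most windows are rejected by the cheap early stages, so the expensive later stages rarely run.

<syntaxhighlight lang="python">
import numpy as np


def evaluate_cascade(window, stages, thresholds):
    """Run one detection window through a cascade of boosted stages.

    Each stage is assumed to be a callable returning a real-valued score;
    the window is rejected as soon as any stage score falls below that
    stage's threshold, so later (more expensive) stages never execute
    for most negative windows.
    """
    for stage, threshold in zip(stages, thresholds):
        if stage(window) < threshold:
            return False  # early rejection
    return True  # survived every stage: report a detection


# Toy illustration: each "stage" is a stand-in for a boosted ensemble whose
# cost grows with its number of weak learners, mirroring the 1, 10, 25, 25, 50
# feature counts used by Viola and Jones for the first five layers.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feature_counts = [1, 10, 25, 25, 50]

    def make_stage(n_features):
        weights = rng.normal(size=n_features)

        def stage(window):
            # Placeholder weak-learner responses; a real detector would
            # compute n_features Haar-like feature values on the window.
            responses = rng.normal(size=n_features) + window.mean()
            return float(weights @ responses)

        return stage

    stages = [make_stage(n) for n in feature_counts]
    thresholds = [0.0] * len(stages)
    window = rng.normal(size=(24, 24))  # a 24x24 image patch
    print("detected" if evaluate_cascade(window, stages, thresholds) else "rejected")
</syntaxhighlight>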